UK Regulators’ Path for AI Starts With Auditing Algorithms

The Digital Regulation Cooperation Forum (DRCF), a group of four U.K. regulators, published two documents last week, on April 28, providing businesses with guidance on the benefits and risks of artificial intelligence (AI) and machine learning (ML) and on how to audit algorithms.

The United Kingdom, like many other countries around the world, is developing a new regulatory framework to govern the use of AI and ML. But unlike the European Union, which has already drafted a law that is now being discussed in Parliament, the U.K. is taking a slower, more comprehensive approach to AI regulation.

In 2021, the U.K. government published a National AI Strategy setting out its plan to become an AI “superpower.” The strategy included the creation of a national AI research and innovation program, an AI Standards Hub (to establish, or collaborate in the development of, international standards) and the modification of existing laws to account for AI. The government could also publish an AI white paper later this year, a step that typically precedes legislation.

However, the DRCF may have already started shaping this AI framework. One unique feature of this joint body is its plan to draw on the capabilities and powers of each individual member, the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO) and the telecom regulator (Ofcom), to deliver a common response to stakeholders: in other words, one unified message from regulators. This may be particularly useful for businesses, as more and more regulators are publishing guidance and recommendations without realizing the impact some actions may have on other regulators’ areas. For instance, greater algorithmic transparency could promote competition, but it could also raise data protection issues. The DRCF’s approach should reduce the tension between these areas of AI and ML regulation while providing useful advice for companies.

This cross-agency collaboration won’t be limited to the development of guidelines: the DRCF plans to use one agency’s resources for the benefit of the others when needed. For example, the FCA’s regulatory sandbox lets firms test products and services in a controlled environment, and many firms use it to develop algorithms. The ICO also has its own sandbox. The DRCF will explore ways for other members to use these sandboxes to test products and services from the companies they oversee.

But one of the first areas where the DRCF is seeking input from the public, and may take coordinated action, is the auditing of algorithms. According to the DRCF’s report, the current auditing ecosystem suffers from a number of issues. First, there is a lack of effective governance, including a lack of clarity about the standards auditors should audit against and about what good auditing and good outcomes look like. Second, it can be difficult for some auditors, such as academics or civil society bodies, to access algorithmic systems and scrutinize them effectively. Third, there are insufficient avenues for those affected by algorithmic processing to seek redress, and regulators need to ensure that action is taken to remedy harms surfaced by audits.

The DRCF is evaluating what the next steps should be, and the paper includes six proposals with varying levels of regulatory intervention, ranging from clarifying how external audits should be conducted and providing guidance, to helping standard-setting authorities convert regulatory requirements into testable audit criteria and accrediting organizations to carry out audits. The public consultation will remain open until June 8.
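To make the idea of “testable audit criteria” concrete, here is a minimal, hypothetical sketch, not drawn from the DRCF paper, of how a requirement such as “outcomes must not differ materially across groups” might be expressed as an automated check. The demographic-parity metric, the 0.2 tolerance and the toy data are all illustrative assumptions.

```python
# Hypothetical example: turning a plain-language fairness requirement into
# a pass/fail audit criterion. Metric, threshold and data are assumptions.

def demographic_parity_gap(outcomes, groups):
    """Return the largest difference in positive-outcome rates between groups.

    outcomes: list of 0/1 decisions produced by the algorithm under audit.
    groups:   list of group labels, aligned with `outcomes`.
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + outcome)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)


if __name__ == "__main__":
    # Toy decisions for two groups, A and B.
    outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    MAX_GAP = 0.2  # assumed tolerance; a real standard would set this value
    gap = demographic_parity_gap(outcomes, groups)
    print(f"demographic parity gap: {gap:.2f}")
    assert gap <= MAX_GAP, "audit criterion failed: outcome rates diverge"
```

The point of such a criterion is that an accredited auditor could run it mechanically against a system’s decision logs; the hard regulatory work lies in choosing the metric and threshold, which is exactly the standard-setting task the paper describes.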

Read More: AI in Financial Services in 2022: US, EU and UK Regulation